Images

Introduction

Images are simply n-dimensional regular arrays of pixels, where n >= 2. Each pixel has k channels of information; once instantiated, every image has k >= 1. All pixels and all channels share the same data type, but in general there is no restriction on what that data type is.

Although there are many Image class specializations, all Images are instances of AbstractNDImage, and the vast majority of the functionality is shared by all images. We use the syntax

(i, j, ...., l, [k])

to declare an image shape. Here i is the size of the first image dimension, j the second. The image has n dimensions, with the final spatial dimension being of size l. The [] symbolizes the channel axis - this image has k channels. A few examples for clarity:

  • low-resolution greyscale image: (320, 240, [1])
  • high-resolution RGB image: (1700, 1650, [3])
  • intensity voxel image: (1024, 1024, 1024, [1])

Commonly the channels have an explicit meaning - these are symbolized by a <>. For example:

  • low-resolution greyscale image: (320, 240, <I>)
  • high-resolution RGB image: (1700, 1650, <R, G, B>)
  • depth image: (1024, 768, <Z>)
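
To make the notation concrete, here is a minimal sketch (plain numpy, no menpo types involved) of how a couple of the example signatures above map onto numpy array shapes - the channel axis is simply the trailing axis of the array.


In [ ]:
import numpy as np
# (320, 240, [1]) - a low-resolution greyscale image with a single channel
low_res_grey = np.zeros((320, 240, 1))
# (1700, 1650, <R, G, B>) - a high-resolution RGB image with three channels
high_res_rgb = np.zeros((1700, 1650, 3))
print 'greyscale array shape: {}'.format(low_res_grey.shape)
print 'RGB array shape:       {}'.format(high_res_rgb.shape)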

We now use this notation to explain all of the image classes. As a final note - some classes are fixed to have only one channel. The constructors for these images don't expect you to pass in a numpy array with a redundant channel axis on the end. To signify this, the channel signature is wrapped in exclamation marks to show that it is implicitly generated for you, for example

  • depth image: (1024, 768, !<Z>!)

To aid with the explanations, let's import the good old Takeo and Lenna images. import_builtin_asset(asset_name) allows us to quickly grab a few builtin images.


In [ ]:
%matplotlib inline
import numpy as np
import menpo.io as pio
lenna = pio.import_builtin_asset('lenna.png')
takeo_rgb = pio.import_builtin_asset('takeo.ppm')
# Takeo is RGB with repeated channels - convert to greyscale
takeo = takeo_rgb.as_greyscale(mode='average')

In [ ]:
print 'Lenna is a {}'.format(type(lenna))
print 'Takeo is a {}'.format(type(takeo))

AbstractNDImage

(i, j, ...., l, [k])

All images are Image instances, and a large bulk of the functionality can be explored in this one class.


In [ ]:
from menpo.image import Image
print "Lenna 'isa' Image: {}".format(isinstance(lenna, Image))
print "Takeo 'isa' Image: {}".format(isinstance(lenna, Image))

pixels

  • The actual data of the image is stored on the self.pixels property. self.pixels[..., -1] is referred to as the channel axis - it is always present on an instantiated subclass of Image (even if, for instance, we know the number of channels to always be 1)

In [ ]:
print 'Takeo shape: {}'.format(takeo.pixels.shape)  # takeo was converted to greyscale above, so a single channel remains
print 'The number of channels in Takeo is {}'.format(takeo.pixels.shape[-1])
print "But this the right way to find out is with the 'n_channels' property: {}".format(takeo.n_channels)
print 'n_channels for Lenna is {}'.format(lenna.n_channels)

shape

  • The self.shape of the image is the spatial dimensions of the array - that's (i, j, ..., l)

In [ ]:
print 'Takeo has a shape of {}'.format(takeo.shape)
print 'Lenna has a shape of {}'.format(lenna.shape)

width and height

  • annoyingly, images have a peculiar format. The 0th axis of pixels is the 'height' or 'y' axis, and it starts at the top of the image and runs down. The 1st axis is the 'width' or 'x' axis - it starts from the left of the image and runs across.

Most of the time, worrying about this will lead you into hot water - it's a lot better not to get bogged down in the terminology and just to consider the image as an array, just like all our other data. As a result, all our algorithms, such as gradient, are ordered by axis 0, 1, ..., n, not x, y, z (as that would be axis 1, 0, 2). The self.shape we printed above was the shape of the underlying array, and so was semantically (height, width). You can use the self.width and self.height properties to check this for yourself if you ever get confused.


In [ ]:
print "Takeo's arrangement in memory (for maths) is {}".format(takeo.shape)
print 'Semantically, Takeo has W:{} H:{}'.format(takeo.width, takeo.height)
print takeo  # shows the common semantic labels
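
As a quick sanity check of the relationship described above, shape should simply read (height, width) for a 2D image:


In [ ]:
# shape is ordered by axis, so for a 2D image it is (height, width)
print 'shape == (height, width)? {}'.format(takeo.shape == (takeo.height, takeo.width))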

centre

  • returns the geometric centre of the image, in axis ordering

In [ ]:
# note that this is (axis0, axis1), which is (height, width) or (Y, X)!
print 'the centre of Takeo is {}'.format(takeo.centre)

counts

  • Image metrics are directly accessible as properties. Note that n_pixels is channel-independent - to find the total size of the array (including channels) use n_elements.

In [ ]:
print 'Lenna n_dims : {}'.format(lenna.n_dims)
print 'Lenna n_channels : {}'.format(lenna.n_channels)
print 'Lenna n_pixels : {}'.format(lenna.n_pixels)
print 'Lenna n_elements: {}'.format(lenna.n_elements)

view

  • As you'd expect, all images are viewable.

In [ ]:
takeo.view()

In [ ]:
lenna.view()

you can pass channels=x to inspect a single channel of the image


In [ ]:
# viewing Lenna's first (red) channel...
lenna.view(channels=0)

crop

  • All images are croppable. There are two core crop methods: crop(), which is in-place, and cropped_copy(), which returns the cropped image without altering the instance it is called on. Both execute identical code paths. To crop, we provide the minimum values per dimension where we want the crop to start, and the maximum values where we want the crop to end. For example, to crop Takeo from the centre down to the bottom corner, we could do

In [ ]:
takeo_cropped = takeo.cropped_copy(takeo.centre, np.array(takeo.shape))
takeo_cropped.view()
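
The in-place crop() mentioned above takes the same arguments; here is a quick sketch that operates on a throwaway deep copy so that Takeo itself is left untouched.


In [ ]:
from copy import deepcopy
takeo_scratch = deepcopy(takeo)
# crop() modifies takeo_scratch directly rather than returning a new image
takeo_scratch.crop(takeo.centre, np.array(takeo.shape))
print takeo_scratch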

rescale

  • All images are rescalable. Simply choose the scale factor you wish to apply. For instance, to make Lenna twice as big

In [ ]:
lenna_double = lenna.rescale(2.0)
print lenna_double
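
Scale factors below 1 shrink the image in exactly the same way. A quick check of the resulting shapes (the exact pixel counts depend on rounding, so treat the printed values as indicative):


In [ ]:
lenna_half = lenna.rescale(0.5)
print 'original Lenna shape: {}'.format(lenna.shape)
print 'halved Lenna shape:   {}'.format(lenna_half.shape)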

landmark support

  • All Images are landmarkable. Let's import an image that has landmarks attached

In [ ]:
breakingbad = pio.import_builtin_asset('breakingbad.jpg')
breakingbad.landmarks['PTS'].view()

In [ ]:
breakingbad.landmarks['PTS']

it can sometimes be useful to constrain an image to the region bound around its landmarks. A convenience method exists to do just this


In [ ]:
breakingbad.crop_to_landmarks(boundary=20)
# note that this method is smart enough to not stray outside the boundary of the image
breakingbad.landmarks['PTS'].view()

BooleanNDImage

(i, j, ...., l, !<B>!)

The first concrete Image subclass we will look at is BooleanNDImage. This is an n-dimensional image with a single channel per pixel. The datatype of this image is np.bool. First, remember that BooleanNDImage is a subclass of AbstractNDImage and so all of the above features apply again.


In [ ]:
from menpo.image import BooleanImage
random_seed = np.random.random(lenna.shape) # shape doesn't include channel - and that's what we want
random_mask = BooleanImage(random_seed > 0.5)
print "the mask's shape is as expected: {}".format(random_mask.shape)
print "the channel has been added to the mask's pixel's shape for us: {}".format(random_mask.pixels.shape)
random_mask.view()

Note that the constructor for the BooleanImage doesn't require you to pass in the redundant channel axis - it's created for you.

blank()

  • If you just want a quick all-true or all-false mask, use the blank() method. You can rely on this existing on every concrete Image class.

In [ ]:
all_true_mask = BooleanImage.blank((120, 240))
all_false_mask = BooleanImage.blank((120, 240), fill=False)

metrics

  • It's trivial to find out how many True and False elements there are in the mask

In [ ]:
print 'n_pixels on random_mask: {}'.format(random_mask.n_pixels)
print 'n_true pixels on random_mask: {}'.format(random_mask.n_true)
print 'n_false pixels on random_mask: {}'.format(random_mask.n_false)
print 'proportion_true on random_mask: {:.3}'.format(random_mask.proportion_true)
print 'proportion_false on random_mask: {:.3}'.format(random_mask.proportion_false)
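
As a quick consistency check, n_true and n_false partition n_pixels:


In [ ]:
print 'n_true + n_false == n_pixels? {}'.format(
    random_mask.n_true + random_mask.n_false == random_mask.n_pixels)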

true_indices/false_indices

  • In addition, BooleanNDImage has functionality that aids in the use of the class as a mask to another image. The indices properties give you access to the coordinates of the True and False values as if the mask had been flattened.

In [ ]:
from copy import deepcopy
small_amount_true = deepcopy(all_false_mask)
small_amount_true.pixels[4, 8] = True
small_amount_true.pixels[15, 56] = True
small_amount_true.pixels[0, 4] = True
print small_amount_true.true_indices  # note the indices are returned in ascending C (row-major) order
print 'the shape of true indices: {}'.format(small_amount_true.true_indices.shape)
print 'the shape of false indices: {}'.format(small_amount_true.false_indices.shape)

all_indices

  • For completeness, you can request the indices of the whole mask

In [ ]:
print 'the shape of all indices: {}'.format(small_amount_true.all_indices.shape)
# note that true_indices and false_indices together make up all_indices

mask

  • if you need to directly mask another image of the same size (with an arbitrary number of channels) use the mask property. This is used heavily in MaskedNDImage.

In [ ]:
lenna_masked_pixels_flattened = lenna.pixels[random_mask.mask]
print 'shape of the masked pixels: {}'.format(lenna_masked_pixels_flattened.shape)
# note we can only do this as random_mask is the same shape as lenna
print 'are Lenna and random_mask the same shape? {}'.format(lenna.shape == random_mask.shape)

print

  • often, you just want an overview of an image. Just print it to get a quick summary

In [ ]:
print random_mask
print lenna
print takeo

MaskedNDImage

(i, j, ...., l, [k])

Notice in the above print statements that lenna and takeo have masks attached. This is because all concrete image classes bar BooleanImage are instances of MaskedNDImage, which means masking functionality is available on all the images we deal with directly. Just as you would expect, MaskedNDImages have a mask attached to them which augments their usual behavior.

mask

  • all MaskedNDImages have a BooleanNDImage of appropriate size attached to them at the mask property. On construction, a mask can be specified via the mask kwarg (either a boolean ndarray or a BooleanNDImage instance). If nothing is provided, the mask is set to all-true. A MaskedNDImage with an all-true mask behaves exactly like an AbstractNDImage.

In [ ]:
# imported images have all-true masks on them to start with
print takeo.mask
takeo.mask.view()
print breakingbad.mask
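
As a sketch of the construction behavior described above - assuming MaskedNDImage can be imported from menpo.image just like the other classes, and that its constructor takes the pixel array (with channel axis) plus an optional mask kwarg:


In [ ]:
# assumption: MaskedNDImage is importable from menpo.image like Image and BooleanImage
from menpo.image import MaskedNDImage
pixels = np.random.random(lenna.pixels.shape)  # same spatial shape as random_mask
explicitly_masked = MaskedNDImage(pixels, mask=random_mask)  # BooleanImage mask
default_masked = MaskedNDImage(pixels)  # no mask given - an all-true mask is attached
print explicitly_masked
print default_masked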

constrain_mask_to_landmarks

  • this isn't actually available to all MaskedNDImages - only to the subclass specializing in 2D images, Abstract2DImage (which both RGBImage and IntensityImage are instances of). It allows us to update the mask to equal the convex hull around some landmarks on the image. You can choose a particular group of landmarks (e.g. PTS) and then a specific label (e.g. perimeter). By default, if neither is provided (and if there is only one landmark group), all the landmarks are used to form the convex hull.

In [ ]:
breakingbad.constrain_mask_to_landmarks()

In [ ]:
breakingbad.mask.view()

In [ ]:
breakingbad.landmarks.view()

view behavior

  • By default, only the masked part of a masked image is shown when viewing

In [ ]:
breakingbad.view()

use masked=False to see everything


In [ ]:
breakingbad.view(masked=False)

Thanks to kwarg chaining, this can even be used when viewing landmarks


In [ ]:
breakingbad.landmarks.view(channels=2, masked=False)

as_vector() / from_vector() behavior

  • as_vector() and from_vector() on MaskedNDImages operate only on the True values of the mask - as_vector() returns just the masked pixels, flattened, and from_vector() expects a vector of the same form.

In [ ]:
print 'breakingbad has {} pixels, but only {} are masked.'.format(breakingbad.n_pixels, breakingbad.n_true_pixels)
print 'breakingbad has {} elements (3 x n_pixels)'.format(breakingbad.n_elements)
vectorized_bad = breakingbad.as_vector()
print 'vector of breaking bad is of shape {}'.format(vectorized_bad.shape)
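
from_vector() goes the other way; a minimal round-trip sketch, assuming it accepts the flattened vector and rebuilds a new image with the same mask:


In [ ]:
# rebuild an image from the masked vector - the masked values should round-trip exactly
rebuilt_bad = breakingbad.from_vector(vectorized_bad)
print rebuilt_bad
print 'round-trip matches: {}'.format(np.allclose(rebuilt_bad.as_vector(), vectorized_bad))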

gradient()

  • all masked images can calculate their own gradient - the result is always a MaskedNDImage. Note that landmarks from the original image are persisted to the gradient. Also, the nullify_values_at_mask_boundaries kwarg is useful for calculating gradients with masked data that has empty regions, such as in AAMs.

In [ ]:
grads = breakingbad.gradient(nullify_values_at_mask_boundaries=True)

In [ ]:
grads.view(channels=3, masked=False)
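
Since the gradient is itself a MaskedNDImage with the original landmarks persisted to it, everything covered above applies to it too. A couple of quick checks (the exact channel count depends on how the per-dimension derivatives are stacked, so treat the printed number as indicative):


In [ ]:
print grads
print 'gradient n_channels: {}'.format(grads.n_channels)
# the 'PTS' landmarks were carried over from breakingbad
grads.landmarks['PTS'].view()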